Analyzing defense in team sports is generally challenging because of the limited amount of event data. Previous studies evaluated football team defense by predicting the events of ball gain and being attacked using the locations of all players and the ball. However, they did not consider the importance of these events, assumed perfect observation of all 22 players, and did not fully investigate the influence of player diversity (e.g., nationality and sex). Here, we propose a generalized valuation method for defensive teams that score-scales the predicted probabilities of these events. Using open-source location data of all players in broadcast video frames from the men's Euro 2020 and women's Euro 2022 football tournaments, we investigated the effect of the number of players on the prediction and validated our approach by analyzing the games. The results show that predicting being attacked, scoring, and conceding did not require information on all players, whereas predicting ball gain required information on three to four offensive and defensive players. Through game analyses, we explain the defensive excellence of the finalist teams in Euro 2020. Our approach may be applicable to location data obtained from broadcast video frames of football games.
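As a rough illustration of the score-scaling idea, the sketch below converts predicted event probabilities into a goal-scaled defensive value; the function name, the formula, and the conditional scoring/conceding probabilities are illustrative assumptions, not the paper's implementation.

```python
def defensive_value(p_gain, p_attacked, p_score_after_gain, p_concede_after_attack):
    """Toy score-scaling of predicted defensive event probabilities.

    p_gain / p_attacked would come from classifiers fed with player and ball
    locations; the conditional scoring and conceding probabilities convert them
    to a goal-value scale. All names and the formula are illustrative.
    """
    return p_gain * p_score_after_gain - p_attacked * p_concede_after_attack

# One toy frame: 60% chance of ball gain, 30% chance of being attacked.
print(defensive_value(0.6, 0.3, p_score_after_gain=0.02, p_concede_after_attack=0.05))
```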
Automatic fault detection is a major challenge in many sports. During a race, judges visually determine faults according to the rules, so ensuring objectivity and fairness in judging is important. To address this issue, some studies have attempted to detect faults automatically using sensors and machine learning. However, there are problems associated with attaching sensors and with equipment such as high-speed cameras, which conflict with the judges' visual judgement, as well as with the interpretability of the fault detection models. In this study, we propose a fault detection system based on non-contact measurement. We use pose estimation and machine learning models trained on the judgements of multiple qualified judges to realize fair fault judgement. We verified the system using smartphone videos of normal race walking and intentionally faulty walking by athletes, including a medalist of the Tokyo Olympics. The validation results show that the proposed system achieves an average accuracy of over 90%. We also reveal that the machine learning model detects faults according to the rules of race walking. In addition, the medalist's intentionally faulty walking motion differed from that of college walkers. This finding contributes to the realization of a more general fault detection model. The code and data are available at https://github.com/szucchini/racewalk-aijudge.
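A minimal sketch of the non-contact pipeline described above, assuming pose keypoints have already been extracted from smartphone video; the single knee-angle feature, the placeholder keypoints, and the placeholder labels are illustrative assumptions rather than the study's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def knee_angle(hip, knee, ankle):
    """Single illustrative pose feature: knee flexion angle in degrees."""
    v1, v2 = hip - knee, ankle - knee
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
# Placeholder keypoints standing in for pose-estimation output on video frames.
hips, knees, ankles = rng.normal(size=(3, 200, 2))
X = np.array([[knee_angle(h, k, a)] for h, k, a in zip(hips, knees, ankles)])
# Placeholder fault labels standing in for aggregated judgements of qualified judges.
y = (X[:, 0] < 150).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```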
Evaluating the off-ball movements of individual soccer players is crucial for team assessment, scouting, and fan engagement. It is said that players do not have the ball for roughly 87 of the 90 minutes of a game on average. However, it has been difficult to evaluate attacking players without the ball and to reveal how their movements contribute to creating scoring opportunities for teammates. In this paper, we evaluate players who create off-ball scoring opportunities by comparing their actual movements with reference movements generated via trajectory prediction. First, we predict player trajectories with a graph variational neural network that accurately models the relationships between players and predicts long-term trajectories. Next, based on the difference in a modified off-ball evaluation index between the actual and predicted trajectories, we evaluate how the actual movement contributes to scoring opportunities compared with the predicted movement. For validation, we examined the relationships with annual salary, goals, and expert match ratings over all games of a professional team in one season. The results show that annual salary correlated significantly with the proposed indicator in a way that could not be explained by existing indicators or goals. Our results suggest the effectiveness of the method as an indicator of how a player without the ball creates scoring opportunities for teammates.
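As a toy illustration of evaluating movement by the difference between actual and predicted trajectories, the sketch below uses a stand-in off-ball index that merely decays with distance to the goal and to the ball; the index, the goal coordinates, and the random trajectories are assumptions, not the paper's model.

```python
import numpy as np

def obso(player_xy, ball_xy, goal_xy=np.array([52.5, 0.0])):
    """Toy stand-in for an off-ball scoring-opportunity index."""
    return np.exp(-0.05 * np.linalg.norm(player_xy - goal_xy)) * \
           np.exp(-0.02 * np.linalg.norm(player_xy - ball_xy))

def off_ball_credit(actual_traj, predicted_traj, ball_traj):
    """Evaluate actual movement against a trajectory-prediction reference."""
    diffs = [obso(a, b) - obso(p, b)
             for a, p, b in zip(actual_traj, predicted_traj, ball_traj)]
    return float(np.mean(diffs))

T = 50  # frames
actual = np.cumsum(np.random.randn(T, 2), axis=0)
predicted = np.cumsum(np.random.randn(T, 2), axis=0)
ball = np.cumsum(np.random.randn(T, 2), axis=0)
print(off_ball_credit(actual, predicted, ball))
```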
This work presents a self-supervised method for learning dense, semantically rich visual concept embeddings for images, inspired by methods for learning word embeddings in NLP. Our method improves on existing work by producing more expressive embeddings and by being applicable to high-resolution images. Viewing the generation of natural images as a stochastic process in which a set of latent visual concepts gives rise to observable pixel appearances, our method is formulated to learn the inverse mapping from pixels to concepts. It greatly improves the effectiveness of self-supervised learning for dense embedding maps by introducing superpixelization as the natural hierarchical step up from pixels to a small set of visually coherent regions. Additional contributions are regional contextual masking with non-uniform shapes matching visually coherent patches, and complexity-based view sampling inspired by masked language models. The enhanced expressiveness of our dense embeddings is demonstrated by significantly improving state-of-the-art representation quality benchmarks on COCO (+12.94 mIoU, +87.6%) and Cityscapes (+16.52 mIoU, +134.2%). The results also show superior scaling and domain generalization properties not demonstrated by prior work.
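A small sketch of the superpixelization step, assuming scikit-image's SLIC for segmentation; the random image, the placeholder dense feature map, and the `pool_regions` helper are illustrative, and the masked concept-prediction objective itself is omitted.

```python
import numpy as np
from skimage.segmentation import slic

# Superpixelization as the step up from pixels to visually coherent regions;
# region-pooled features would then feed the self-supervised objective.
image = np.random.rand(128, 128, 3)
segments = slic(image, n_segments=64, compactness=10, channel_axis=-1)

def pool_regions(feature_map, segments):
    """Average a dense per-pixel feature map over each superpixel."""
    regions = np.unique(segments)
    return np.stack([feature_map[segments == r].mean(axis=0) for r in regions])

features = np.random.rand(128, 128, 8)        # placeholder dense features
region_embeddings = pool_regions(features, segments)
print(region_embeddings.shape)                # (n_regions, 8)
```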
Recent advances in reinforcement learning (RL) have made it possible to develop sophisticated agents that excel in a wide range of applications. Simulations with such agents can provide valuable information in scenarios that are difficult to study experimentally in the real world. In this paper, we examine the play-style characteristics of football RL agents and uncover how strategies may develop during training. The learned strategies are then compared with those of real football players. We explore what can be learned from the simulated environment using aggregated statistics and social network analysis (SNA). As a result, we find that (1) there are strong correlations between the competitiveness of an agent and various SNA metrics, and (2) aspects of the RL agents' play style become similar to those of real-world footballers as the agents become more competitive. We discuss further advances that may be necessary to improve the understanding required to fully exploit RL for football analysis.
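To make the SNA side concrete, the sketch below builds a toy pass network with networkx and computes example metrics that could be correlated with an agent's competitiveness; the pass counts and the choice of metrics are illustrative assumptions.

```python
import networkx as nx

# Toy pass network of an agent's team: edge weights are pass counts.
passes = [("GK", "CB", 12), ("CB", "CM", 18), ("CM", "ST", 7),
          ("CM", "LW", 9), ("LW", "ST", 5), ("CB", "LW", 4)]
G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# Example SNA metrics that could be correlated with the agent's
# competitiveness (e.g., its win rate or rating after training).
centrality = nx.degree_centrality(G)
density = nx.density(G)
print(centrality, density)
```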
The quantum kernel method is one of the key approaches to quantum machine learning, with the advantages of not requiring optimization and of theoretical simplicity. Owing to these properties, several experimental demonstrations and discussions of potential advantages have been developed so far. However, as in classical machine learning, not all quantum machine learning models can be regarded as kernel methods. In this work, we explore quantum machine learning models with deep parameterized quantum circuits, aiming to go beyond the conventional quantum kernel method. In this case, the representation power and performance are expected to be enhanced, while the training process may become a bottleneck because of the barren plateaus problem. We find, however, that the parameters of a sufficiently deep quantum circuit hardly move from their initial values during training, allowing a first-order expansion in the parameters. This behavior is similar to the neural tangent kernel in the classical literature, and such deep variational quantum machine learning can be described by another emergent kernel, the quantum tangent kernel. Numerical simulations show that the proposed quantum tangent kernel outperforms the conventional quantum kernel method on an ansatz-generated dataset. This work provides a new direction beyond the conventional quantum kernel method and explores the potential power of quantum machine learning with deep parameterized quantum circuits.
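The sketch below illustrates the tangent-kernel construction in the abstract sense: a Gram matrix of the model's parameter gradients taken at the initial parameters. The `model` function is a classical toy stand-in for a circuit's expectation value, and the finite-difference gradients are an assumption made to keep the example dependency-free.

```python
import numpy as np

def model(x, theta):
    """Toy stand-in for the scalar output of a deep parameterized circuit
    (not an actual quantum simulation)."""
    return np.cos(x @ theta[: x.size]) * np.sin(theta[x.size:].sum())

def tangent_kernel(X, theta0, eps=1e-5):
    """Tangent-kernel-style Gram matrix: inner products of the model's
    parameter gradients evaluated at the initial parameters theta0."""
    def grad(x):
        g = np.zeros_like(theta0)
        for i in range(theta0.size):
            d = np.zeros_like(theta0); d[i] = eps
            g[i] = (model(x, theta0 + d) - model(x, theta0 - d)) / (2 * eps)
        return g
    J = np.stack([grad(x) for x in X])
    return J @ J.T

X = np.random.randn(5, 3)
theta0 = np.random.randn(8)
print(tangent_kernel(X, theta0))
```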
FIG. 1. Schematic diagram of a Variational Quantum Algorithm (VQA). The inputs to a VQA are: a cost function C(θ), with θ a set of parameters that encodes the solution to the problem, an ansatz whose parameters are trained to minimize the cost, and (possibly) a set of training data {ρ_k} used during the optimization. Here, the cost can often be expressed in the form in Eq. (3), for some set of functions {f_k}. Also, the ansatz is shown as a parameterized quantum circuit (on the left), which is analogous to a neural network (also shown schematically on the right). At each iteration of the loop one uses a quantum computer to efficiently estimate the cost (or its gradients). This information is fed into a classical computer that leverages the power of optimizers to navigate the cost landscape C(θ) and solve the optimization problem in Eq. (1). Once a termination condition is met, the VQA outputs an estimate of the solution to the problem. The form of the output depends on the precise task at hand. The red box indicates some of the most common types of outputs.
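As a minimal sketch of the hybrid loop in the caption, the code below minimizes a cost of the form C(θ) = Σ_k f_k(θ) with a gradient-free classical optimizer; the quantum estimation step is replaced by a classical toy landscape with artificial shot noise, so the cost terms and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_cost(theta):
    """Stand-in for the quantum estimation of C(theta) = sum_k f_k(theta):
    a classical toy landscape plus shot noise, only to illustrate the loop."""
    f = [np.sin(theta[0]) ** 2, (np.cos(theta[1]) - 0.3) ** 2]
    return sum(f) + np.random.normal(0, 1e-3)

theta0 = np.array([0.8, 2.0])
result = minimize(estimate_cost, theta0, method="COBYLA")  # classical optimizer
print(result.x, result.fun)
```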
We propose a classical-quantum hybrid algorithm for machine learning on near-term quantum processors, which we call quantum circuit learning. A quantum circuit driven by our framework learns a given task by tuning parameters implemented on it. The iterative optimization of the parameters allows us to circumvent the high-depth circuit. Theoretical investigation shows that a quantum circuit can approximate nonlinear functions, which is further confirmed by numerical simulations. Hybridizing a low-depth quantum circuit and a classical computer for machine learning, the proposed framework paves the way toward applications of near-term quantum devices for quantum machine learning.
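Below is a tiny, classically simulated sketch of the framework's idea: encode the input as a rotation on a single qubit, apply trainable rotations, read out the Pauli-Z expectation, and tune the parameters to fit a nonlinear target. The one-qubit circuit, the trainable output scale, the target function, and the finite-difference updates are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def predict(x, theta):
    # Encode x as a rotation, apply trainable rotations, read out <Z>,
    # and rescale by a trainable output coefficient.
    state = ry(theta[1]) @ rz(x) @ ry(theta[0]) @ np.array([1.0, 0.0])
    z_exp = np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2
    return theta[2] * z_exp

def loss(theta, xs, ys):
    return np.mean([(predict(x, theta) - y) ** 2 for x, y in zip(xs, ys)])

xs = np.linspace(-1.0, 1.0, 30)
ys = xs ** 2                                    # nonlinear target function
theta = np.array([0.1, 0.1, 1.0])
print("initial loss:", loss(theta, xs, ys))
for _ in range(300):                            # crude finite-difference descent
    grad = np.array([(loss(theta + e, xs, ys) - loss(theta - e, xs, ys)) / 2e-3
                     for e in np.eye(3) * 1e-3])
    theta -= 0.1 * grad
print("trained loss:", loss(theta, xs, ys))
```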
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2), ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask here is whether it is possible, if at all, to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study the generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on the extreme-scale models as the teacher model by two innovations: (1) the novel adaptation of NeuroLogic Decoding to enhance the generation quality of the weak, off-the-shelf language models, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is of the largest and highest quality available to date.
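To make the self-imitation loop concrete, here is a deliberately tiny, fully classical toy: sample candidate generics from a weighted "model", keep those a critic accepts, reweight the model toward its accepted outputs, and repeat. The concepts, the critic, and the reweighting scheme are placeholders and bear no relation to the actual I2D2 models or NeuroLogic Decoding.

```python
import random

random.seed(0)
concepts = ["birds", "fish", "cars"]
properties = ["can fly", "can swim", "have wheels", "can sing"]
plausible = {("birds", "can fly"), ("birds", "can sing"),
             ("fish", "can swim"), ("cars", "have wheels")}

# Start with a uniform "model" over concept-property statements.
weights = {(c, p): 1.0 for c in concepts for p in properties}

def generate(n=50):
    keys = list(weights)
    return random.choices(keys, weights=[weights[k] for k in keys], k=n)

def critic(statement):           # stands in for a learned generics critic
    return statement in plausible

for _ in range(5):               # self-imitation rounds
    accepted = [s for s in generate() if critic(s)]
    for s in accepted:
        weights[s] += 1.0        # "fine-tune" the model toward its accepted outputs

print(sorted(weights, key=weights.get, reverse=True)[:4])
```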
Transparency of Machine Learning models used for decision support in various industries becomes essential for ensuring their ethical use. To that end, feature attribution methods such as SHAP (SHapley Additive exPlanations) are widely used to explain the predictions of black-box machine learning models to customers and developers. However, a parallel trend has been to train machine learning models in collaboration with other data holders without accessing their data. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency with experiments on open-access datasets. Our results demonstrated a significant (by at least a factor of 1.75) decrease in feature attribution discrepancies among the users of distributed machine learning.
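A minimal sketch of why background data matters for consistency, using the shap package's KernelExplainer: two parties explaining the same model against different local backgrounds obtain different attributions, while a jointly agreed background removes that source of discrepancy. The toy data, model, and background choices are assumptions and do not reproduce the paper's Data Collaboration algorithms.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

# Toy shared model; in the distributed setting each party would only hold
# part of the data or feature space.
X = np.random.randn(300, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

bg_a, bg_b = X[:50], X[150:200]          # each party's local background view
shared_bg = shap.sample(X, 50)           # jointly agreed background sample

x_test = X[:5]
phi_a = shap.KernelExplainer(model.predict_proba, bg_a).shap_values(x_test)
phi_b = shap.KernelExplainer(model.predict_proba, bg_b).shap_values(x_test)
phi_shared = shap.KernelExplainer(model.predict_proba, shared_bg).shap_values(x_test)
print(np.abs(np.array(phi_a) - np.array(phi_b)).mean())  # attribution discrepancy
```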